
TABLE 1.3  Results reported in Liu et al. [148].

Dataset    Index      SiamFC   XNOR    RB-SF
GOT-10K    AO         0.348    0.251   0.327
           SR         0.383    0.230   0.343
OTB50      Precision  0.761    0.457   0.706
           SR         0.556    0.323   0.496
OTB100     Precision  0.808    0.541   0.786
           SR         0.602    0.394   0.572
UAV123     Precision  0.745    0.547   0.688
           SR         0.528    0.374   0.497

After proposing RBCNs, Liu et al. [148] applied them to object tracking. They used the SiamFC network as the tracking backbone and binarized it into the Rectified Binary Convolutional SiamFC Network (RB-SF). They evaluated RB-SF on four datasets, GOT-10K [94], OTB50 [250], OTB100 [251], and UAV123 [177], using average overlap (AO) and success rate (SR). The results are shown in Table 1.3.
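For reference, both indices can be computed from per-frame bounding-box overlaps. The following is a minimal sketch, assuming axis-aligned (x, y, w, h) boxes and a single IoU threshold of 0.5 for SR; the official toolkits of these benchmarks average SR over multiple thresholds and compute precision from center-location error instead.

import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two (x, y, w, h) boxes.
    xa = max(box_a[0], box_b[0])
    ya = max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def ao_and_sr(pred_boxes, gt_boxes, threshold=0.5):
    # AO: mean overlap over all frames; SR: fraction of frames above threshold.
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return overlaps.mean(), (overlaps > threshold).mean()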

Yang et al. [269] propose a new method to optimize a deep neural network for YOLO-based object tracking by simultaneously using approximate weight binarization, a trainable-threshold group binarization activation function, and depthwise separable convolutions, significantly reducing both computational complexity and model size.
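The exact architecture of [269] is not reproduced here; the sketch below only illustrates, in PyTorch, how a learnable binarization threshold can be combined with a depthwise separable convolution. Class and parameter names are hypothetical, and the threshold gradient is a common straight-through heuristic rather than the authors' trainable group scheme.

import torch
import torch.nn as nn

class BinActive(torch.autograd.Function):
    # Binarize activations against a learnable threshold t.
    @staticmethod
    def forward(ctx, x, t):
        ctx.save_for_backward(x, t)
        return torch.where(x > t, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        x, t = ctx.saved_tensors
        grad_x = grad_out * (x.abs() <= 1).float()  # straight-through, clipped
        grad_t = -grad_out.sum().reshape_as(t)      # heuristic threshold gradient
        return grad_x, grad_t

class BinDWSeparable(nn.Module):
    # Depthwise + pointwise convolution applied to binarized activations.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.threshold = nn.Parameter(torch.zeros(1))
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        x = BinActive.apply(x, self.threshold)
        return self.pointwise(self.depthwise(x))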

1.2.4 Applications

Other applications include face recognition and face alignment. Face recognition: Liu et al. [160] apply a Weight Binarization Cascade Convolutional Neural Network to eye localization, a subtask of face recognition. Here, BNNs help reduce the storage size of the model and speed up computation.

Face alignment: Bulat et al. [25] test their method on three challenging datasets for large-pose face alignment: AFLW [121], AFLW-PIFA [108], and AFLW2000-3D [302], reporting state-of-the-art performance in many cases.

1.3 Our Works on BNNs

We have designed several BNNs and 1-bit CNNs. MCN [236] was our first work, in which we introduced modulation filters to approximate unbinarized filters in an end-to-end framework.
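The idea can be illustrated with a minimal sketch: each binarized filter is elementwise-rescaled by a modulation filter M shared across output channels. Here M is computed in closed form from |W|, whereas in MCN it is learned end to end, so this is a simplification rather than the exact formulation of [236].

import torch

def modulated_binary_approx(W):
    # W: real-valued filter bank of shape (out_ch, in_ch, k, k).
    B = torch.sign(W)                      # binary filters in {-1, +1}
    M = W.abs().mean(dim=0, keepdim=True)  # shared modulation filter (1, in_ch, k, k)
    return M * B                           # modulated binary approximation of W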

Based on MCN, we introduced projection convolutional neural networks (PCNNs) [77], which perform discrete backpropagation via projection, sketched below.
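In its simplest form, discrete backpropagation via projection works as follows: the forward pass projects latent real-valued weights onto a discrete set ({-1, +1} here), and the gradient is passed straight through to update the latent weights. PCNN's actual projection targets learned quantization levels, so this reduction is only illustrative.

import torch

class ProjectSign(torch.autograd.Function):
    # Forward: project latent real weights onto {-1, +1}.
    # Backward: straight-through gradient to the latent weights.
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

# The optimizer updates the latent real-valued weights; the projected
# copy is what the convolution actually applies.
w = torch.randn(16, 3, 3, 3, requires_grad=True)
w_proj = ProjectSign.apply(w)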

Similarly to PCNNs, our CBCNs [149] aim to improve backpropagation, enhancing representation ability through a circular backpropagation method. In contrast, our RBCN [148] and BONN [287] improve the training of new models by changing the loss function and the optimization process: RBCNs introduce a generative adversarial network (GAN), while BONNs are based on Bayesian learning.
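The GAN component can be sketched as a discriminator that tries to tell full-precision features from binarized ones, while the binary network learns to make them indistinguishable. The discriminator architecture and the 256-dimensional feature size below are assumptions for illustration, not the design used in [148].

import torch
import torch.nn as nn

feat_dim = 256  # assumed flattened feature dimension
disc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(real_feat, bin_feat):
    # real_feat, bin_feat: (batch, feat_dim) features from the full-precision
    # and binarized networks, respectively.
    ones = torch.ones(real_feat.size(0), 1)
    zeros = torch.zeros(bin_feat.size(0), 1)
    d_loss = bce(disc(real_feat.detach()), ones) + bce(disc(bin_feat.detach()), zeros)
    g_loss = bce(disc(bin_feat), ones)  # binary net tries to look full-precision
    return d_loss, g_loss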

Recurrent bilinear optimization for binary neural networks (RBONN) is introduced to investigate the relationship between full-precision parameters and their binary counterparts. This is implemented by controlling the backpropagation process, in which sparse real-valued parameters are backtracked and wait until the other parameters are sufficiently trained to reach their full performance. Resilient Binary Neural Networks (ReBNNs) are introduced to mitigate the gradient oscillation problem.